prevent host services from being accessible through service IPs #618
- on startup create ipsets and firewall rules
- on sync update ipsets
- on cleanup remove firewall rules and ipsets

Fixes cloudnativelabs#282.

Signed-off-by: Steven Armstrong <[email protected]>
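The three lifecycle steps above could be sketched roughly as follows. This is a hypothetical illustration only: the chain name `KUBE-ROUTER-SERVICES`, the ipset type, and the example member are assumptions, not taken from the PR (the ipset name `kube-router-service-ips` does appear later in the thread).

```shell
# on startup: create the ipset and a dedicated chain, and hook the chain in
ipset create kube-router-service-ips hash:ip,port -exist
iptables -N KUBE-ROUTER-SERVICES
iptables -I INPUT -j KUBE-ROUTER-SERVICES

# on sync: refresh ipset membership as services are added/removed
ipset add kube-router-service-ips 10.96.0.1,tcp:443 -exist

# on cleanup: unhook, flush and delete the chain, then destroy the ipset
iptables -D INPUT -j KUBE-ROUTER-SERVICES
iptables -F KUBE-ROUTER-SERVICES
iptables -X KUBE-ROUTER-SERVICES
ipset destroy kube-router-service-ips
```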
These are the created ipsets:
These are the created iptables rules:
You need to filter out the node IP somehow, because adding
@bazuchan thanks for the input. I don't use NodePort, so that was not on my radar.
If I add our rules to a custom chain and make sure all relevant traffic goes there, then I should be independent of what firewalld does, right? I will set up a test bed with all possible service types (ClusterIP, LoadBalancer, NodePort) and investigate what's needed.
Actually, NodePort services themselves seem to be handled/whitelisted just fine by the PR. BUT as soon as you have a NodePort service, the node's IP becomes part of the IPVS services, which means it gets added to the kube-router-service-ips ipset. This in turn causes all traffic to the node's IPs that is not explicitly allowed to be rejected, including e.g. access to the kubernetes API. If I change my REJECT rule from:
to
Then NodePort services are covered and other traffic to node-local IPs still works. The ipset kube-router-node-ips contains the node's local IPs and is created by pkg/controllers/routing/network_routes_controller.go, so it only exists if kube-router is run with routing enabled. @murali-reddy, @roffe Would it make sense to move all the ipset handling out of the current controllers to some dedicated place (another controller?) and always create and manage them no matter which parts of kube-router are actually active? Otherwise I have to replicate what the kube-router-node-ips ipset does in the proxy controller. Other ideas?
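The before/after rules referenced above were not captured in this transcript. A hypothetical sketch of the change (the chain name and exact match options are assumptions; only the ipset names `kube-router-service-ips` and `kube-router-node-ips` appear in the thread) might look like:

```shell
# Before (hypothetical): reject anything aimed at a service IP that was
# not explicitly whitelisted earlier in the chain.
iptables -A KUBE-ROUTER-SERVICES \
    -m set --match-set kube-router-service-ips dst \
    -j REJECT

# After (hypothetical): additionally exclude the node's own IPs, so that
# non-service traffic to the node (SSH, the kubernetes API, ...) still works.
iptables -A KUBE-ROUTER-SERVICES \
    -m set --match-set kube-router-service-ips dst \
    -m set ! --match-set kube-router-node-ips dst \
    -j REJECT
```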
@asteven thanks for working on this. I will take a look and test it out.
Each controller has its own set of ipsets that it manages. I would prefer to have no coupling at all between the controllers, or dependency on other components, to the extent possible. Basically keep them self-contained except for utility functions or cross-cutting concerns like metrics, health/liveness checks etc.
@murali-reddy Ok, so I'll update the PR to manage its own ipset for node IPs. Do you have a simple way to run something like end-to-end tests, e.g. how to test the DSR-related code? It currently seems difficult to change something without the risk of breaking something else that you're not using/looking out for.
@murali-reddy, @bazuchan I'm using this in my staging environment and it works for me. Can you please review the PR?
For reference, these are the ipsets and firewall rules created by the PR (nodeIP: 10.206.32.10). All other listed services are either ExternalIP or just plain ClusterIP.
Sorry for the delay @asteven I will do it tomorrow. |
thanks @asteven for your work on this important functionality. I gave it a first pass and am testing it out. Looks good to me overall. If I have to summarise, the critical changes are:
The third rule is, I believe, to exclude the node's local IPs, as this is traffic kube-router should not be handling explicitly? I will do further testing to verify that the ipsets are synced properly on service add/delete events.
Yes. I went for the custom chain as that makes it easier to order the rules and to clean up.
Yes. Whitelist all expected ip:port combinations (ClusterIP, ExternalIPs and NodePort).
Yes. If we don't exclude the node's local IPs, then all traffic to any of them that is part of an IPVS service is blocked. This is mainly needed when NodePort services are used, as in that case the traffic directed to the node's IP enters our custom chain. I'm not sure if the 'allow icmp echo requests to service IPs' rule makes sense, or is even potentially misleading.
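Put together, the three answers above might translate into a chain like the following. This is a hypothetical illustration: the chain name and the split into an ip,port whitelist set and an ip-only reject set are assumptions, not taken verbatim from the PR.

```shell
# dedicated chain, so rules are easy to order and to clean up
iptables -N KUBE-ROUTER-SERVICES
iptables -I INPUT -j KUBE-ROUTER-SERVICES

# whitelist expected ip:port combinations (ClusterIP, ExternalIP, NodePort);
# assumes a hash:ip,port set holding the allowed combinations
iptables -A KUBE-ROUTER-SERVICES \
    -m set --match-set kube-router-service-ip-ports dst,dst -j ACCEPT

# default-deny everything else aimed at a service IP, but exclude the
# node's own IPs so non-service traffic to the node keeps working
iptables -A KUBE-ROUTER-SERVICES \
    -m set --match-set kube-router-service-ips dst \
    -m set ! --match-set kube-router-node-ips dst \
    -j REJECT
```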
BTW: none of this supports IPv6 yet. But I think that should be addressed in a future PR.
Changes look good to me from code-review and testing. |
@asteven Could you please open an issue for IPv6 so that it's in the backlog.
Fixes #282.
Fixes #613.